An Adaptive Ridge Procedure for L0 Regularization
Authors
Abstract
Penalized selection criteria like AIC or BIC are among the most popular methods for variable selection. Their theoretical properties have been studied intensively and are well understood, but applying them to high-dimensional data is difficult due to the non-convex optimization problem induced by L0 penalties. In this paper we introduce an adaptive ridge procedure (AR), in which iteratively weighted ridge problems are solved, with weights updated in such a way that the procedure converges towards selection with L0 penalties. After introducing AR, its specific shrinkage properties are studied in the particular case of orthogonal linear regression. Based on extensive simulations for the non-orthogonal case as well as for Poisson regression, the performance of AR is studied and compared with SCAD and adaptive LASSO. Furthermore, an efficient implementation of AR in the context of least-squares segmentation is presented. The paper ends with an illustrative example of applying AR to analyze GWAS data.
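The iteration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the common adaptive-ridge weight update w_j = 1/(β_j² + δ²), where δ is a small constant that keeps the weights finite, so that the weighted ridge penalty λ Σ w_j β_j² approximates the L0 penalty at convergence.

```python
import numpy as np

def adaptive_ridge(X, y, lam=1.0, delta=1e-5, n_iter=100, tol=1e-8):
    """Sketch of an adaptive ridge iteration approximating L0 selection.

    At each step a weighted ridge problem
        beta = argmin ||y - X beta||^2 + lam * sum_j w_j * beta_j^2
    is solved in closed form, then the weights are updated as
        w_j = 1 / (beta_j^2 + delta^2)   # assumed L0-type weight update
    so that coefficients driven towards zero receive ever larger
    penalties, while large coefficients are penalized only mildly.
    """
    n, p = X.shape
    XtX = X.T @ X
    Xty = X.T @ y
    w = np.ones(p)              # start from an ordinary ridge problem
    beta = np.zeros(p)
    for _ in range(n_iter):
        # closed-form weighted ridge solution
        beta_new = np.linalg.solve(XtX + lam * np.diag(w), Xty)
        w = 1.0 / (beta_new ** 2 + delta ** 2)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta
```

On a toy sparse problem the iteration drives the irrelevant coefficients to values of order δ while leaving the true signal nearly unshrunk; thresholding the result at, say, δ then yields a hard model selection. The tuning parameter λ plays the role of the L0 penalty constant (e.g. an AIC- or BIC-type choice).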
Similar articles
The Florida State University College of Arts and Sciences Theories on Group Variable Selection in Multivariate Regression Models
We study group variable selection in multivariate regression models. Group variable selection means selecting the non-zero rows of the coefficient matrix: since there are multiple response variables, if one predictor is irrelevant to estimation then the corresponding row must be zero. In a high-dimensional setup, shrinkage estimation methods are applicable and guarantee smaller MSE than OLS acc...
A new method for 3-D magnetic data inversion with physical bound
Inversion of magnetic data is an important step towards interpretation of practical data. Smooth inversion is a common technique for the inversion of such data, and a physical bound constraint can improve the solution to the magnetic inverse problem. However, how to introduce the bound constraint into the inversion procedure is important. Imposing a bound constraint makes the magnetic data inversion a n...
A risk ratio comparison of L0 and L1 penalized regression
In the past decade, there has been an explosion of interest in using l1-regularization in place of l0-regularization for feature selection. We present theoretical results showing that while l1-penalized linear regression never outperforms l0-regularization by more than a constant factor, in some cases using an l1 penalty is infinitely worse than using an l0 penalty. We also compare algorithms f...
Efficient Regularized Regression for Variable Selection with L0 Penalty
Variable (feature, gene, model; we use these terms interchangeably) selection for regression with high-dimensional big data has found many applications in bioinformatics, computational biology, image processing, and engineering. One appealing approach is L0 regularized regression, which directly penalizes the number of nonzero features in the model. The L0 norm is known as the most essential sparsity meas...
The l0-norm-based Blind Image Deconvolution: Comparison and Inspiration
Single-image blind deblurring has been intensively studied since Fergus et al.'s variational Bayes method in 2006. It is now commonly believed that the blur-kernel estimation accuracy is highly dependent on the salient edge information pursued from the blurred image, which has stimulated numerous l0-approximating blind deblurring methods via various techniques and tricks. This paper, however, focuse...